Magnetic resonance imaging (MRI) is the central modality for stroke imaging. It is used upon patient admission to guide treatment decisions, such as selecting patients for intravenous thrombolysis or endovascular therapy. MRI is subsequently used during the hospital stay to predict outcome by visualizing infarct core size and location. Furthermore, it can be used to characterize stroke etiology, e.g., to differentiate between (cardio-)embolic and non-embolic stroke. Computer-based automated medical image processing is increasingly entering clinical routine. Previous iterations of the Ischemic Stroke Lesion Segmentation (ISLES) challenge have helped to establish benchmark methods for the segmentation of acute and sub-acute ischemic stroke lesions. Here we introduce an expert-annotated, multicenter MRI dataset for the segmentation of acute to sub-acute stroke lesions. The dataset comprises 400 multi-vendor MRI cases with high variability in stroke lesion size, quantity, and location. It is split into a training dataset of n=250 and a test dataset of n=150. All training data will be made publicly available; the test dataset will be used for model validation only and will not be released to the public. This dataset forms the basis of the ISLES 2022 challenge, whose goal is to enable the development and benchmarking of robust and accurate segmentation algorithms for ischemic stroke.
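The challenge benchmarks lesion segmentation quality; a common metric for that purpose is the Dice similarity coefficient between a predicted mask and the expert annotation. The sketch below is illustrative only (the array shapes and random masks are assumptions, not part of the dataset release):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity between two binary lesion masks of the same shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Illustrative example with random 3D masks standing in for MRI lesion maps.
rng = np.random.default_rng(0)
pred_mask = rng.random((64, 64, 32)) > 0.95
true_mask = rng.random((64, 64, 32)) > 0.95
print(f"Dice: {dice_coefficient(pred_mask, true_mask):.3f}")
```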
This study presents a new database and method to detect reduced alertness conditions caused by alcohol, drug consumption, and drowsiness from near-infrared (NIR) periocular eye images. The study focuses on determining the influence of external factors on the central nervous system (CNS). The goal is to analyze how these factors affect iris and pupil movement behavior, and whether the resulting changes can be classified with a standard NIR iris capture device. This paper proposes a modified MobileNetV2 to classify iris NIR images taken from subjects under the influence of alcohol, drugs, or drowsiness. The results show that the MobileNetV2-based classifier can detect the unfit condition from iris samples captured after alcohol and drug consumption, with detection accuracies of 91.3% and 99.1%, respectively. The drowsiness condition is the most challenging, at 72.4%. When images are grouped into the two classes fit/unfit, the model reaches accuracies of 94.0% and 84.0%, respectively, while using a smaller number of parameters than standard deep learning network algorithms. This work is a step towards biometric applications that automatically classify "fitness for duty" and prevent accidents due to alcohol or drug consumption and drowsiness.
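As a rough illustration of the kind of model adaptation described above (the paper's exact modifications to MobileNetV2 are not specified here, so the replacement head below is an assumption, using a recent torchvision):

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical fit/unfit classifier built on a standard MobileNetV2 backbone.
num_classes = 2  # fit vs. unfit (assumption; the paper also reports per-condition results)
backbone = models.mobilenet_v2(weights=None)
in_features = backbone.classifier[1].in_features
backbone.classifier = nn.Sequential(
    nn.Dropout(p=0.2),
    nn.Linear(in_features, num_classes),
)

# A single-channel NIR periocular image would be resized and replicated to 3 channels.
dummy_nir_batch = torch.randn(4, 3, 224, 224)
logits = backbone(dummy_nir_batch)
print(logits.shape)  # torch.Size([4, 2])
```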
Biometrics is the science of identifying an individual based on their intrinsic anatomical or behavioural characteristics, such as fingerprints, face, iris, gait, and voice. Iris recognition is one of the most successful methods because it exploits the rich texture of the human iris, which is unique even for twins and does not degrade with age. Modern approaches to iris recognition utilize deep learning to segment the valid portion of the iris from the rest of the eye, so it can then be encoded, stored and compared. This paper aims to improve the accuracy of iris semantic segmentation systems by introducing a novel data augmentation technique. Our method can transform an iris image with a certain dilation level into any desired dilation level, thus augmenting the variability and number of training examples from a small dataset. The proposed method is fast and does not require training. The results indicate that our data augmentation method can improve segmentation accuracy by up to 15% for images with high pupil dilation, which creates a more reliable iris recognition pipeline, even under extreme dilation.
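The paper's exact dilation transform is not reproduced here; the sketch below assumes a simple linear "rubber-sheet" remapping of the iris annulus between the pupil and iris boundaries, with nearest-neighbour sampling, purely to illustrate the idea of synthesizing a new dilation level:

```python
import numpy as np

def change_pupil_dilation(img, center, r_pupil, r_iris, r_pupil_new):
    """Radially remap the iris annulus so the pupil radius becomes r_pupil_new.

    A linear 'rubber-sheet' remapping: texture between the old pupil boundary
    and the iris boundary is stretched/compressed to fit the new pupil boundary.
    """
    h, w = img.shape[:2]
    cy, cx = center
    yy, xx = np.mgrid[0:h, 0:w]
    dy, dx = yy - cy, xx - cx
    r = np.hypot(dy, dx)
    theta = np.arctan2(dy, dx)

    # Normalised radial position in the *output* annulus, mapped back to the
    # corresponding radius in the *source* annulus.
    t = np.clip((r - r_pupil_new) / max(r_iris - r_pupil_new, 1e-6), 0.0, 1.0)
    r_src = r_pupil + t * (r_iris - r_pupil)

    src_y = np.clip(np.round(cy + r_src * np.sin(theta)).astype(int), 0, h - 1)
    src_x = np.clip(np.round(cx + r_src * np.cos(theta)).astype(int), 0, w - 1)

    out = img[src_y, src_x]
    out[r < r_pupil_new] = 0            # new (enlarged or shrunken) pupil area
    out[r > r_iris] = img[r > r_iris]   # leave pixels outside the iris untouched
    return out
```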
In this paper, we propose an end-to-end framework that jointly learns keypoint detection, descriptor representation and cross-frame matching for the task of image-based 3D localization. Prior art has tackled each of these components individually, purportedly to alleviate the difficulty of effectively training a holistic network. We design a self-supervised image warping correspondence loss for both feature detection and matching, a weakly-supervised epipolar constraint loss on relative camera pose learning, and a directional matching scheme that detects keypoint features in a source image and performs coarse-to-fine correspondence search on the target image. We leverage this framework to enforce cycle consistency in our matching module. In addition, we propose a new loss to robustly handle both definite inlier/outlier matches and less-certain matches. The integration of these learning mechanisms enables end-to-end training of a single network performing all three localization components. Benchmarking our approach on public datasets exemplifies how such an end-to-end framework yields more accurate localization, outperforming both traditional methods and state-of-the-art weakly supervised methods.
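As a rough sketch of the cycle-consistency idea mentioned above (not the paper's network or losses), a mutual nearest-neighbour check over descriptor sets illustrates how forward and backward matches can be required to agree:

```python
import torch

def cycle_consistent_matches(desc_src: torch.Tensor, desc_tgt: torch.Tensor):
    """Keep only matches where source->target and target->source agree.

    desc_src: (N, D) L2-normalised descriptors from the source image.
    desc_tgt: (M, D) L2-normalised descriptors from the target image.
    Returns index pairs (i, j) passing the mutual nearest-neighbour check.
    """
    sim = desc_src @ desc_tgt.t()      # (N, M) cosine similarities
    fwd = sim.argmax(dim=1)            # best target index for each source keypoint
    bwd = sim.argmax(dim=0)            # best source index for each target keypoint
    src_idx = torch.arange(desc_src.shape[0])
    keep = bwd[fwd] == src_idx         # cycle: i -> fwd[i] -> back to i
    return src_idx[keep], fwd[keep]

# Illustrative usage with random descriptors.
src = torch.nn.functional.normalize(torch.randn(128, 256), dim=1)
tgt = torch.nn.functional.normalize(torch.randn(140, 256), dim=1)
i, j = cycle_consistent_matches(src, tgt)
print(i.shape, j.shape)
```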
In constrained reinforcement learning (C-RL), an agent seeks to learn from the environment a policy that maximizes the expected cumulative reward while satisfying minimum requirements in secondary cumulative reward constraints. Several algorithms rooted in sample-based primal-dual methods have been recently proposed to solve this problem in policy space. However, such methods are based on stochastic gradient descent-ascent algorithms whose trajectories are connected to the optimal policy only after a mixing output stage that depends on the algorithm's history. As a result, there is a mismatch between the behavioral policy and the optimal one. In this work, we propose a novel algorithm for constrained RL that does not suffer from these limitations. Leveraging recent results on regularized saddle-flow dynamics, we develop a novel stochastic gradient descent-ascent algorithm whose trajectories converge to the optimal policy almost surely.
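To make the primal-dual structure concrete, here is a toy projected gradient descent-ascent loop on a Lagrangian of the constrained problem. It is not the paper's regularized saddle-flow algorithm, and the quadratic reward/constraint functions are illustrative stand-ins for sampled policy-gradient estimates:

```python
import numpy as np

# Toy projected descent-ascent on L(theta, lam) = J_r(theta) + lam * (J_c(theta) - b):
# maximize expected reward J_r subject to secondary cumulative reward J_c(theta) >= b.

def grad_J_r(theta):
    return -(theta - 2.0)              # gradient of J_r(theta) = -0.5 * (theta - 2)^2

def J_c(theta):
    return -0.5 * (theta + 1.0) ** 2 + 3.0

def grad_J_c(theta):
    return -(theta + 1.0)

b = 2.0                                # minimum required secondary cumulative reward
theta, lam = 0.0, 0.0
eta = 0.02

for _ in range(5000):
    theta += eta * (grad_J_r(theta) + lam * grad_J_c(theta))   # ascent in the policy parameter
    lam = max(0.0, lam - eta * (J_c(theta) - b))               # projected descent in the multiplier

print(f"theta = {theta:.3f}, lambda = {lam:.3f}, J_c(theta) = {J_c(theta):.3f}")
```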
We propose a structure-preserving model-reduction methodology for large-scale dynamic networks with tightly-connected components. First, the coherent groups are identified by a spectral clustering algorithm on the graph Laplacian matrix that models the network feedback. Then, a reduced network is built, where each node represents the aggregate dynamics of each coherent group, and the reduced network captures the dynamic coupling between the groups. We provide an upper bound on the approximation error when the network graph is randomly generated from a weighted stochastic block model. Finally, numerical experiments align with and validate our theoretical findings.
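A minimal sketch of the clustering-then-aggregation pipeline described above, assuming an undirected weighted graph and a simple summation of inter-group weights (the paper's exact aggregation of the dynamics and its error bound are not reproduced):

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_network(W: np.ndarray, k: int):
    """Cluster nodes of a weighted graph W into k coherent groups and
    build the aggregated (reduced) weight matrix between groups."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                      # graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)
    embedding = eigvecs[:, :k]              # k smallest eigenvectors
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(embedding)

    # Aggregate: reduced edge weight = total weight between the two groups.
    P = np.zeros((W.shape[0], k))
    P[np.arange(W.shape[0]), labels] = 1.0
    W_red = P.T @ W @ P
    np.fill_diagonal(W_red, 0.0)
    return labels, W_red

# Illustrative two-block graph resembling a stochastic block model sample.
rng = np.random.default_rng(0)
dense = (rng.random((10, 10)) < 0.8).astype(float)
sparse = (rng.random((10, 10)) < 0.05).astype(float)
W = np.block([[dense, sparse], [sparse.T, dense]])
W = np.triu(W, 1)
W = W + W.T                                 # symmetric, zero diagonal
labels, W_red = reduce_network(W, k=2)
print(labels, W_red, sep="\n")
```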
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
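For reference, the released checkpoints can be loaded with the Hugging Face transformers library. The snippet below uses the smaller `bigscience/bloom-560m` checkpoint purely for illustration; the full 176B `bigscience/bloom` model requires substantial hardware, and the prompt text is an arbitrary example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Smaller BLOOM checkpoint used here for illustration; the same code applies
# to the full "bigscience/bloom" model given sufficient memory/accelerators.
model_name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "A language model trained on 46 natural and 13 programming languages can"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```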
In this research work, we have demonstrated the application of Mask R-CNN (Mask Region-based Convolutional Neural Network), a deep-learning algorithm for computer vision and specifically object detection, to the semiconductor defect inspection domain. Stochastic defect detection and classification during semiconductor manufacturing has grown to be a challenging task as circuit pattern dimensions continue to shrink (e.g., for pitches less than 32 nm). Defect inspection and analysis by state-of-the-art optical and e-beam inspection tools is generally driven by rule-based techniques, which often leads to misclassification and thereby necessitates human expert intervention. In this work, we have revisited and extended our previous deep-learning-based defect classification and detection method towards improved defect instance segmentation in SEM images, capturing the precise extent of each defect and generating a mask for each defect category/instance. This also enables us to extract and calibrate each segmented mask and quantify the pixels that make it up, which in turn allows us to count the instances of each defect category as well as to calculate their surface area in pixels. We aim to detect and segment different types of inter-class stochastic defect patterns such as bridge, break, and line collapse, and to differentiate accurately between intra-class multi-categorical defect bridge scenarios (thin/single/multi-line/horizontal/non-horizontal) for aggressive pitches as well as thin resists (high-NA applications). Our proposed approach demonstrates its effectiveness both quantitatively and qualitatively.
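To illustrate the mask-based instance counting and pixel-area measurement described above, the sketch below uses torchvision's generic Mask R-CNN with an assumed confidence threshold and a random placeholder image, not the authors' trained SEM model (a recent torchvision is assumed):

```python
import torch
from collections import Counter
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Generic Mask R-CNN standing in for a model fine-tuned on SEM defect classes.
model = maskrcnn_resnet50_fpn(weights=None, num_classes=5)  # e.g. background + 4 defect types
model.eval()

image = torch.rand(3, 512, 512)            # placeholder for a normalized SEM image
with torch.no_grad():
    pred = model([image])[0]

score_thresh = 0.5                         # assumed confidence threshold
keep = pred["scores"] > score_thresh
masks = (pred["masks"][keep, 0] > 0.5)     # binarize per-instance masks
labels = pred["labels"][keep].tolist()

# Count instances per defect class and measure each instance's area in pixels.
print("instances per class:", Counter(labels))
areas = masks.flatten(1).sum(dim=1)        # pixel count per instance
for lbl, area in zip(labels, areas.tolist()):
    print(f"class {lbl}: {int(area)} px")
```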
This work focuses on training generative action/video recognition models whose output is a free-form, action-specific caption describing the video (rather than an action class label). The generative approach has practical advantages, such as producing more fine-grained and human-readable output, and being naturally open-set. To this end, we propose to adapt a pre-trained generative vision-and-language (V&L) foundation model for video/action recognition. While there have been several recent attempts to adapt V&L models trained with contrastive learning (e.g., CLIP), to our knowledge we propose the first method that sets out to achieve this goal for a generative model. We first show that direct fine-tuning of a generative model to produce action categories suffers from severe overfitting. To alleviate this, we introduce REST, a training framework consisting of two key components: (a) an unsupervised method for adapting the generative model to action/video via pseudo-caption generation and self-training, i.e., without using any action-specific labels; (b) a CLIP-based retrieval approach for discovering a diverse set of pseudo-captions for each video with which to train the model. Importantly, we show that both components are necessary to obtain high accuracy. We evaluate REST on the problem of zero-shot action recognition, where we show that our approach is highly competitive with contrastive-learning-based methods. Code will be made available.
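A rough sketch of the CLIP-based retrieval step (component (b) above), assuming frame embeddings are mean-pooled into a single video embedding and captions come from a small illustrative candidate pool; this is not the authors' released code:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Illustrative candidate caption pool; in practice this would be a large corpus.
captions = ["a person playing the guitar",
            "someone slicing vegetables in a kitchen",
            "a dog catching a frisbee in a park"]

# Placeholder video: a few synthetic frames mean-pooled into one video embedding.
frames = [Image.new("RGB", (224, 224), color=(i * 40, 100, 150)) for i in range(4)]
inputs = processor(text=captions, images=frames, return_tensors="pt", padding=True)

with torch.no_grad():
    out = model(**inputs)
video_emb = out.image_embeds.mean(dim=0, keepdim=True)               # (1, D)
video_emb = video_emb / video_emb.norm(dim=-1, keepdim=True)
text_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)

scores = (video_emb @ text_emb.t()).squeeze(0)                        # similarity per caption
topk = scores.topk(k=2)
print([captions[i] for i in topk.indices.tolist()])                   # retrieved pseudo-captions
```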
Drowsiness is a major concern for drivers and one of the leading causes of traffic accidents. Advances in cognitive neuroscience and computer science have enabled the detection of driver drowsiness through brain-computer interfaces (BCIs) and machine learning (ML). However, several challenges remain open and should be addressed. First, the literature lacks a comprehensive evaluation of drowsiness detection performance across a heterogeneous set of ML algorithms. Finally, the detection performance of scalable ML models suitable for groups of subjects needs to be studied and compared with the single-subject models proposed in the literature. To address these limitations, this work presents an intelligent framework that employs BCIs and features based on electroencephalography (EEG) to detect drowsiness in driving scenarios. The SEED-VIG dataset is used to feed different ML regressors and three-class classifiers, and the best-performing models are then evaluated, analyzed, and compared for individual subjects and groups of subjects. Regarding the single-subject models, Random Forest (RF) obtained a 78% F1-score, improving on the 58% obtained by models used in the literature, such as Support Vector Machines (SVM). Regarding the scalable models, RF reached a 79% F1-score, demonstrating the effectiveness of these approaches. The lessons learned can be summarized as follows: i) not only SVM but also other models insufficiently explored in the literature are relevant for drowsiness detection, and ii) scalable approaches suitable for groups of subjects are also effective at detecting drowsiness, even when evaluated on new subjects not included in model training.
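The comparison above relies on standard estimators; a minimal sketch of a Random Forest baseline with an F1-score evaluation could look like the following, where the synthetic features stand in for the EEG-based features extracted from SEED-VIG and the three classes are an assumed vigilance labeling:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for EEG-derived features with three vigilance classes
# (e.g. awake / tired / drowsy); the real pipeline would use SEED-VIG features.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 25))
y = rng.integers(0, 3, size=600)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print(f"macro F1: {f1_score(y_test, pred, average='macro'):.2f}")
```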